Results 1 - 20 of 4,902
1.
J Acoust Soc Am ; 155(5): 2934-2947, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38717201

ABSTRACT

Spatial separation and fundamental frequency (F0) separation are effective cues for improving the intelligibility of target speech in multi-talker scenarios. Previous studies predominantly focused on spatial configurations within the frontal hemifield, overlooking the ipsilateral side and the entire median plane, where localization confusion often occurs. This study investigated the impact of spatial and F0 separation on intelligibility under these underexplored spatial configurations. Speech reception thresholds were measured in three experiments for scenarios involving two to four talkers, either in the ipsilateral horizontal plane or in the entire median plane, using monotonized speech with varying F0s as stimuli. The results revealed that spatial separation in symmetrical positions (front-back symmetry in the ipsilateral horizontal plane, or front-back and up-down symmetry in the median plane) contributes positively to intelligibility. Both target direction and relative target-masker separation influence the masking release attributed to spatial separation. As the number of talkers exceeds two, the masking release from spatial separation diminishes. Nevertheless, F0 separation remains a remarkably effective cue and could even enhance the benefit provided by spatial separation. Further analysis indicated that current intelligibility models have difficulty accurately predicting intelligibility in the scenarios explored in this study.


Subjects
Cues, Perceptual Masking, Sound Localization, Speech Intelligibility, Speech Perception, Humans, Female, Male, Young Adult, Adult, Speech Perception/physiology, Acoustic Stimulation, Auditory Threshold, Speech Acoustics, Speech Reception Threshold Test, Noise
2.
PeerJ ; 12: e17104, 2024.
Article in English | MEDLINE | ID: mdl-38680894

ABSTRACT

Advancements in cochlear implants (CIs) have led to a significant increase in bilateral CI users, especially among children. Yet, most bilateral CI users do not fully achieve the intended binaural benefit due to potential limitations in signal processing and/or surgical implant positioning. One crucial auditory cue that normal-hearing (NH) listeners can benefit from is the interaural time difference (ITD), i.e., the time difference between the arrival of a sound at the two ears. ITD sensitivity is thought to rely heavily on the effective use of temporal fine structure (very rapid oscillations in sound). Unfortunately, most current CIs do not transmit such true fine structure. Nevertheless, bilateral CI users have demonstrated sensitivity to ITD cues delivered through envelope or interaural pulse time differences, i.e., the time gap between the pulses delivered to the two implants. However, their ITD sensitivity is significantly poorer than that of NH individuals, and it further degrades at higher CI stimulation rates, especially when the rate exceeds 300 pulses per second. The overall purpose of this research thread is to improve spatial hearing abilities in bilateral CI users. This study aims to develop electroencephalography (EEG) paradigms that can be used in clinical settings to assess and optimize the delivery of ITD cues, which are crucial for spatial hearing in everyday life. The research objective of this article was to determine the effect of CI stimulation pulse rate on ITD sensitivity, and to characterize the rate-dependent degradation in ITD perception using EEG measures. To develop protocols for bilateral CI studies, EEG responses were obtained from NH listeners using sinusoidal-amplitude-modulated (SAM) tones and filtered clicks with changes in either fine-structure ITD (ITDFS) or envelope ITD (ITDENV). Multiple EEG responses were analyzed, including the subcortical auditory steady-state responses (ASSRs) and cortical auditory evoked potentials (CAEPs) elicited by stimulus onset, offset, and changes. Results indicated that acoustic change complex (ACC) responses elicited by ITDENV changes were significantly smaller or absent compared with those elicited by ITDFS changes. The ACC morphologies evoked by ITDFS changes were similar to onset and offset CAEPs, although the peak latencies were longest for ACC responses and shortest for offset CAEPs. The high-frequency stimuli clearly elicited subcortical ASSRs, although these were smaller than those evoked by lower-carrier-frequency SAM tones. The 40-Hz ASSRs decreased with increasing carrier frequency. Filtered clicks elicited larger ASSRs than high-frequency SAM tones, with the order being 40 > 160 > 80 > 320 Hz for both stimulus types. Wavelet analysis revealed a clear interaction between detectable transient CAEPs and 40-Hz ASSRs in the time-frequency domain for SAM tones with a low carrier frequency.
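For readers unfamiliar with the stimulus manipulation described above, the sketch below illustrates, under assumed parameter values (48 kHz sample rate, 4 kHz carrier, 40 Hz modulator, 500 µs ITD), how a fine-structure ITD versus an envelope ITD could be imposed on a stereo SAM tone. It is a minimal illustration, not the authors' stimulus-generation code.

```python
import numpy as np

fs = 48000      # sample rate (Hz), assumed
dur = 0.5       # duration (s), assumed
fc = 4000.0     # carrier frequency (Hz), illustrative
fm = 40.0       # modulation rate (Hz), in the 40-Hz ASSR range
itd = 500e-6    # 500-microsecond ITD, illustrative

t = np.arange(int(fs * dur)) / fs

def sam(t_carrier, t_envelope):
    """SAM tone whose carrier and envelope can be delayed independently."""
    carrier = np.sin(2 * np.pi * fc * t_carrier)
    envelope = 0.5 * (1 + np.sin(2 * np.pi * fm * t_envelope))
    return envelope * carrier

# Fine-structure ITD: only the carrier is delayed in the right ear.
left_fs,  right_fs  = sam(t, t), sam(t - itd, t)
# Envelope ITD: only the modulator is delayed in the right ear.
left_env, right_env = sam(t, t), sam(t, t - itd)

stereo_itd_fs  = np.column_stack([left_fs,  right_fs])
stereo_itd_env = np.column_stack([left_env, right_env])
```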


Subjects
Cochlear Implants, Cues, Electroencephalography, Humans, Electroencephalography/methods, Acoustic Stimulation/methods, Sound Localization/physiology, Auditory Perception/physiology, Auditory Evoked Potentials/physiology, Time Factors
3.
Nat Commun ; 15(1): 3116, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38600132

ABSTRACT

Spatiotemporally congruent sensory stimuli are fused into a unified percept. The auditory cortex (AC) sends projections to the primary visual cortex (V1), which could provide signals for binding spatially corresponding audio-visual stimuli. However, whether AC inputs in V1 encode sound location remains unknown. Using two-photon axonal calcium imaging and a speaker array, we measured the auditory spatial information transmitted from AC to layer 1 of V1. AC conveys information about the location of ipsilateral and contralateral sound sources to V1. Sound location could be accurately decoded by sampling AC axons in V1, providing a substrate for making location-specific audiovisual associations. However, AC inputs were not retinotopically arranged in V1, and audio-visual modulations of V1 neurons did not depend on the spatial congruency of the sound and light stimuli. The non-topographic sound localization signals provided by AC might allow the association of specific audiovisual spatial patterns in V1 neurons.


Subjects
Auditory Cortex, Sound Localization, Visual Cortex, Visual Perception/physiology, Auditory Cortex/physiology, Neurons/physiology, Visual Cortex/physiology, Photic Stimulation/methods, Acoustic Stimulation/methods
4.
Article in Chinese | MEDLINE | ID: mdl-38561257

ABSTRACT

Objective: This study investigates the effect of signal-to-noise ratio (SNR), frequency, and bandwidth on horizontal sound localization accuracy in normal-hearing young adults. Methods: From August 2022 to December 2022, a total of 20 normal-hearing young adults (7 males and 13 females, aged 20 to 35 years, mean age 25.4 years) were selected to participate in horizontal azimuth recognition tests under both quiet and noisy conditions. Six narrowband filtered noise stimuli were used, with center frequencies (CFs) of 250, 2000, and 4000 Hz and bandwidths of 1/6 and 1 octave. Continuous broadband white noise was used as the background masker, and the SNRs were 0, -3, and -12 dB. The root-mean-square (RMS) error was used to measure sound localization accuracy, with smaller values indicating higher accuracy. The Friedman test was used to compare the effects of SNR and CF on sound localization accuracy, and the Wilcoxon signed-rank test was used to compare the impact of the two bandwidths on sound localization accuracy in noise. Results: In a quiet environment, the RMS error in horizontal azimuth in normal-hearing young adults ranged from 4.3 to 8.1 degrees. Sound localization accuracy decreased with decreasing SNR: at 0 dB SNR (range: 5.3-12.9 degrees), the difference from the quiet condition was not significant (P>0.05); however, at -3 dB (range: 7.3-16.8 degrees) and -12 dB SNR (range: 9.4-41.2 degrees), sound localization accuracy significantly decreased compared to the quiet condition (all P<0.01). Under noisy conditions, sound localization accuracy differed among stimuli with different frequencies and bandwidths: the high-frequency stimuli performed worst, followed by the mid-frequency stimuli, and the low-frequency stimuli performed best, with significant differences (all P<0.01). Sound localization accuracy for 1/6-octave stimuli was more susceptible to noise interference than for 1-octave stimuli (all P<0.01). Conclusions: The ability of normal-hearing young adults to localize sound in the horizontal plane in the presence of noise is influenced by SNR, CF, and bandwidth. Noise at SNRs of -3 dB or lower can lead to decreased accuracy in narrowband sound localization. Signals with higher CFs and narrower bandwidths are more susceptible to noise interference.
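As a concrete illustration of the accuracy metric used in this study, the following minimal sketch computes the root-mean-square error between presented and reported azimuths; the angle values are invented for illustration and are not data from the study.

```python
import numpy as np

presented = np.array([-60, -30, 0, 30, 60], dtype=float)   # degrees, illustrative
reported  = np.array([-52, -35, 4, 27, 71], dtype=float)   # degrees, illustrative

# RMS error: smaller values indicate higher localization accuracy.
rms_error = np.sqrt(np.mean((reported - presented) ** 2))
print(f"RMS localization error: {rms_error:.1f} deg")
```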


Subjects
Sound Localization, Speech Perception, Male, Female, Humans, Young Adult, Adult, Noise, Signal-to-Noise Ratio, Hearing
5.
J Acoust Soc Am ; 155(4): 2460-2469, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38578178

ABSTRACT

Head-worn devices (HWDs) interfere with the natural transmission of sound from the source to the ears of the listener, worsening their localization abilities. The localization errors introduced by HWDs have been mostly studied in static scenarios, but these errors are reduced if head movements are allowed. We studied the effect of 12 HWDs on an auditory-cued visual search task, where head movements were not restricted. In this task, a visual target had to be identified in a three-dimensional space with the help of an acoustic stimulus emitted from the same location as the visual target. The results showed an increase in the search time caused by the HWDs. Acoustic measurements of a dummy head wearing the studied HWDs showed evidence of impaired localization cues, which were used to estimate the perceived localization errors using computational auditory models of static localization. These models were able to explain the search-time differences in the perceptual task, showing the influence of quadrant errors in the auditory-aided visual search task. These results indicate that HWDs have an impact on sound-source localization even when head movements are possible, which may compromise the safety and the quality of experience of the wearer.


Subjects
Hearing Aids, Sound Localization, Acoustic Stimulation, Head Movements
6.
PLoS Biol ; 22(4): e3002586, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38683852

ABSTRACT

Having two ears enables us to localize sound sources by exploiting interaural time differences (ITDs) in sound arrival. Principal neurons of the medial superior olive (MSO) are sensitive to ITD, and each MSO neuron responds optimally to a best ITD (bITD). In many cells, especially those tuned to low sound frequencies, these bITDs correspond to ITDs for which the contralateral ear leads, and are often larger than the ecologically relevant range, defined by the ratio of the interaural distance and the speed of sound. Using in vivo recordings in gerbils, we found that shortly after hearing onset the bITDs were even more contralaterally leading than found in adult gerbils, and travel latencies for contralateral sound-evoked activity clearly exceeded those for ipsilateral sounds. During the following weeks, both these latencies and their interaural difference decreased. A computational model indicated that spike timing-dependent plasticity can underlie this fine-tuning. Our results suggest that MSO neurons start out with a strong predisposition toward contralateral sounds due to their longer neural travel latencies, but that, especially in high-frequency neurons, this predisposition is subsequently mitigated by differential developmental fine-tuning of the travel latencies.
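The "ecologically relevant range" mentioned above can be made concrete with a one-line calculation: the largest naturally occurring ITD is approximately the interaural distance divided by the speed of sound, as the abstract states. The sketch below uses an assumed gerbil interaural distance of about 3 cm; the exact value is illustrative only.

```python
speed_of_sound = 343.0      # m/s in air
interaural_distance = 0.03  # ~3 cm, an assumed value for an adult gerbil

max_itd_s = interaural_distance / speed_of_sound
print(f"Maximum ITD is roughly {max_itd_s * 1e6:.0f} microseconds")  # about 87 microseconds
```

Best ITDs larger than this value therefore lie outside the range a free-field sound could ever produce, which is why the developmental shift described above is notable.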


Subjects
Acoustic Stimulation, Gerbillinae, Neurons, Superior Olivary Complex, Animals, Neurons/physiology, Superior Olivary Complex/physiology, Sound Localization/physiology, Male, Olivary Nucleus/physiology, Sound, Female
7.
Otol Neurotol ; 45(5): 482-488, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38530367

ABSTRACT

OBJECTIVE: Severely asymmetrical hearing loss (SAHL) is characterized by moderately severe or severe hearing loss on one side and normal or mildly impaired hearing on the contralateral side. The Active tri-CROS combines the Contralateral Routing-of-Signal System (CROS, or BiCROS if the best ear is stimulated) with stimulation of the worst ear by an in-the-canal hearing aid. This study aims to evaluate the benefit of the Active tri-CROS for SAHL patients. STUDY DESIGN: This retrospective study was conducted from September 2019 to December 2020. SETTING: Ambulatory, tertiary care. PATIENTS: Patients were retrospectively included if they had received the Active tri-CROS system after having used a CROS or BiCROS system for SAHL for at least 3 years. MAIN OUTCOME MEASURES: Audiometric gain, signal-to-noise ratio, spatial localization, and the Abbreviated Profile of Hearing Aid Benefit and Tinnitus Handicap Inventory questionnaires were assessed before fitting and after one month with the system. RESULTS: Twenty patients (mean age, 62 years) with a mean hearing loss of 74.3 ± 8.7 dB HL in the worst ear were included. The mean tonal hearing gain in the worst ear was 20 ± 6 dB. The signal-to-noise ratio improved significantly from 1.43 ± 3.9 to 0.16 ± 3.4 dB (p = 0.0001). Spatial localization was not significantly improved. The mean Tinnitus Handicap Inventory score of the eight patients suffering from tinnitus decreased from 45.5 ± 18.5 to 31 ± 25.2 (p = 0.016). CONCLUSIONS: The Active tri-CROS system is a promising new therapeutic solution for SAHL.


Subjects
Hearing Aids, Humans, Middle Aged, Retrospective Studies, Male, Female, Aged, Adult, Unilateral Hearing Loss/rehabilitation, Unilateral Hearing Loss/physiopathology, Sound Localization/physiology, Tinnitus/therapy, Tinnitus/physiopathology
8.
Trends Hear ; 28: 23312165241229880, 2024.
Article in English | MEDLINE | ID: mdl-38545645

ABSTRACT

Bilateral cochlear implants (BiCIs) result in several benefits, including improvements in speech understanding in noise and sound source localization. However, the benefit that bilateral implants provide varies considerably across recipients. Here we consider one of the reasons for this variability: differences in hearing function between the two ears, that is, interaural asymmetry. Thus far, investigations of interaural asymmetry have been highly specialized within various research areas. The goal of this review is to integrate these studies in one place, motivating future research in the area of interaural asymmetry. We first consider bottom-up processing, where binaural cues are represented through excitation-inhibition of signals from the left and right ears, varying with the location of the sound in space, in the lateral superior olive of the auditory brainstem. We then consider top-down processing via predictive coding, which assumes that perception stems from expectations based on context and prior sensory experience, represented by cascading series of cortical circuits. An internal perceptual model is maintained and updated in light of incoming sensory input. Together, we hope that this amalgamation of physiological, behavioral, and modeling studies will help bridge gaps in the field of binaural hearing and promote a clearer understanding of the implications of interaural asymmetry for future research on optimal patient interventions.


Subjects
Cochlear Implantation, Cochlear Implants, Sound Localization, Speech Perception, Humans, Speech Perception/physiology, Hearing, Sound Localization/physiology
9.
Trends Hear ; 28: 23312165241235463, 2024.
Article in English | MEDLINE | ID: mdl-38425297

ABSTRACT

Sound localization testing is key for comprehensive hearing evaluations, particularly in cases of suspected auditory processing disorders. However, sound localization is not commonly assessed in clinical practice, likely due to the complexity and size of conventional measurement systems, which require semicircular loudspeaker arrays in large and acoustically treated rooms. To address this issue, we investigated the feasibility of testing sound localization in virtual reality (VR). Previous research has shown that virtualization can lead to an increase in localization blur. To measure these effects, we conducted a study with a group of normal-hearing adults, comparing sound localization performance in different augmented reality and VR scenarios. We started with a conventional loudspeaker-based measurement setup and gradually moved to a virtual audiovisual environment, testing sound localization in each scenario using a within-participant design. The loudspeaker-based experiment yielded results comparable to those reported in the literature, and the results of the virtual localization test provided new insights into localization performance in state-of-the-art VR environments. By comparing localization performance between the loudspeaker-based and virtual conditions, we were able to estimate the increase in localization blur induced by virtualization relative to a conventional test setup. Notably, our study provides the first proxy normative cutoff values for sound localization testing in VR. As an outlook, we discuss the potential of a VR-based sound localization test as a suitable, accessible, and portable alternative to conventional setups and how it could serve as a time- and resource-saving prescreening tool to avoid unnecessarily extensive and complex laboratory testing.


Subjects
Auditory Perceptual Disorders, Sound Localization, Virtual Reality, Adult, Humans, Hearing Tests
10.
Otol Neurotol ; 45(4): 392-397, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38478407

ABSTRACT

OBJECTIVE: To assess cochlear implant (CI) sound processor usage over time in children with single-sided deafness (SSD) and identify factors influencing device use. STUDY DESIGN: Retrospective, chart review study. SETTING: Pediatric tertiary referral center. PATIENTS: Children with SSD who received CI between 2014 and 2020. OUTCOME MEASURE: Primary outcome was average daily CI sound processor usage over follow-up. RESULTS: Fifteen children with SSD who underwent CI surgery were categorized based on age of diagnosis and surgery timing. Over an average of 4.3-year follow-up, patients averaged 4.6 hours/day of CI usage. Declining usage trends were noted over time, with the first 2 years postactivation showing higher rates. No significant usage differences emerged based on age, surgery timing, or hearing loss etiology. CONCLUSIONS: Long-term usage decline necessitates further research into barriers and enablers for continued CI use in pediatric SSD cases.


Subjects
Cochlear Implantation, Cochlear Implants, Deafness, Unilateral Hearing Loss, Sound Localization, Speech Perception, Humans, Child, Cochlear Implants/adverse effects, Retrospective Studies, Unilateral Hearing Loss/surgery, Unilateral Hearing Loss/rehabilitation, Sound Localization/physiology, Deafness/surgery, Deafness/rehabilitation, Speech Perception/physiology, Treatment Outcome
11.
Neuropsychologia ; 196: 108822, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38342179

ABSTRACT

Ambient sound can mask acoustic signals. The current study addressed how echolocation in people is affected by masking sound, and the role played by the type of sound and spatial (i.e., binaural) similarity. We also investigated the role played by blindness and long-term experience with echolocation, by testing echolocation experts as well as blind and sighted people new to echolocation. Results were obtained in two echolocation tasks in which participants listened to binaural recordings of echolocation and masking sounds, and either localized echoes in azimuth or discriminated echo audibility. Echolocation and masking sounds could be either clicks or broadband noise. An adaptive staircase method was used to adjust signal-to-noise ratios (SNRs) based on participants' responses. When target and masker had the same binaural cues (i.e., both were monaural sounds), people performed better (i.e., had lower SNRs) when target and masker used different types of sound (e.g., clicks in a noise masker or noise in a click masker), compared to when target and masker used the same type of sound (e.g., clicks in a click masker or noise in a noise masker). A very different pattern of results was observed when masker and target differed in their binaural cues, in which case people always performed better when clicks were the masker, regardless of the type of emission used. Further, direct comparison between conditions with and without a binaural difference revealed binaural release from masking only when clicks were used as emissions and masker, but not otherwise (i.e., when noise was used as masker or emission). This suggests that echolocation with clicks and echolocation with noise may differ in their sensitivity to binaural cues. We observed the same pattern of results for echolocation experts and for blind and sighted people new to echolocation, suggesting a limited role played by long-term experience or blindness. In addition to generating novel predictions for future work, the findings also inform instruction in echolocation for people who are blind or sighted.
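The abstract mentions an adaptive staircase that adjusts the SNR from the participants' responses. The sketch below shows one common variant of such a procedure (a two-down/one-up rule with a fixed step size); the rule, step size, trial count, and the simulated listener are assumptions for illustration, not the study's actual parameters.

```python
def staircase(run_trial, start_snr_db=0.0, step_db=2.0, n_trials=40):
    """run_trial(snr_db) -> True if the response was correct."""
    snr = start_snr_db
    history, correct_streak = [], 0
    for _ in range(n_trials):
        correct = run_trial(snr)
        history.append((snr, correct))
        if correct:
            correct_streak += 1
            if correct_streak == 2:   # two correct in a row -> make the task harder
                snr -= step_db
                correct_streak = 0
        else:                         # one incorrect -> make the task easier
            snr += step_db
            correct_streak = 0
    return history

# Example with a simulated listener whose accuracy improves with SNR (hypothetical).
import random
def simulated_listener(snr_db):
    return random.random() < 1 / (1 + 10 ** (-(snr_db + 6) / 4))

track = staircase(simulated_listener)
```

A two-down/one-up rule of this kind converges on roughly the 70.7% correct point of the psychometric function; the reversals of the tracked SNR are typically averaged to estimate the threshold.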


Subjects
Sound Localization, Animals, Humans, Sound Localization/physiology, Blindness, Noise, Acoustics, Cues, Perceptual Masking, Acoustic Stimulation/methods
12.
Trends Hear ; 28: 23312165241229572, 2024.
Article in English | MEDLINE | ID: mdl-38347733

ABSTRACT

Subjective reports indicate that hearing aids can disrupt sound externalization and/or reduce the perceived distance of sounds. Here we conducted an experiment to explore this phenomenon and to quantify how frequently it occurs for different hearing-aid styles. Of particular interest were the effects of microphone position (behind the ear vs. in the ear) and dome type (closed vs. open). Participants were young adults with normal hearing or with bilateral hearing loss, who were fitted with hearing aids that allowed variations in the microphone position and the dome type. They were seated in a large sound-treated booth and presented with monosyllabic words from loudspeakers at a distance of 1.5 m. Their task was to rate the perceived externalization of each word using a rating scale that ranged from 10 (at the loudspeaker in front) to 0 (in the head) to -10 (behind the listener). On average, compared to unaided listening, hearing aids tended to reduce perceived distance and lead to more in-the-head responses. This was especially true for closed domes in combination with behind-the-ear microphones. The behavioral data along with acoustical recordings made in the ear canals of a manikin suggest that increased low-frequency ear-canal levels (with closed domes) and ambiguous spatial cues (with behind-the-ear microphones) may both contribute to breakdowns of externalization.


Subjects
Hearing Aids, Sensorineural Hearing Loss, Sound Localization, Speech Perception, Young Adult, Humans, Speech, Bilateral Hearing Loss, Noise, Speech Perception/physiology
13.
Trends Hear ; 28: 23312165241230947, 2024.
Article in English | MEDLINE | ID: mdl-38361245

ABSTRACT

Sound localization is an important ability in everyday life. This study investigates the influence of vision and presentation mode on auditory spatial bisection performance. Subjects were asked to identify the smaller perceived distance between three consecutive stimuli that were either presented via loudspeakers (free field) or via headphones after convolution with generic head-related impulse responses (binaural reproduction). Thirteen azimuthal sound incidence angles on a circular arc segment of ±24° at a radius of 3 m were included in three regions of space (front, rear, and laterally left). Twenty normally sighted (measured both sighted and blindfolded) and eight blind persons participated. Results showed no significant differences with respect to visual condition, but strong effects of sound direction and presentation mode. Psychometric functions were steepest in frontal space and indicated median spatial bisection thresholds of 11°-14°. Thresholds increased significantly in rear (11°-17°) and laterally left (20°-28°) space in free field. Individual pinna and torso cues, as available only in free field presentation, improved the performance of all participants compared to binaural reproduction. Especially in rear space, auditory spatial bisection thresholds were three to four times higher (i.e., poorer) using binaural reproduction than in free field. The results underline the importance of individual auditory spatial cues for spatial bisection, irrespective of access to vision, which indicates that vision may not be strictly necessary to calibrate allocentric spatial hearing.
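The "binaural reproduction" condition described above amounts to convolving a source signal with left- and right-ear head-related impulse responses (HRIRs) for the desired direction. The sketch below shows the basic operation with placeholder impulse responses; a real implementation would load measured generic HRIRs, which are not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
stimulus = np.random.randn(fs // 2)               # 0.5 s of noise, illustrative

hrir_left = np.zeros(256)                         # placeholder left-ear HRIR
hrir_left[10] = 1.0
hrir_right = np.zeros(256)                        # placeholder right-ear HRIR
hrir_right[30] = 0.8

# Convolve the mono stimulus with each ear's impulse response and trim.
binaural = np.column_stack([
    fftconvolve(stimulus, hrir_left)[:len(stimulus)],
    fftconvolve(stimulus, hrir_right)[:len(stimulus)],
])
```

Because generic HRIRs lack the listener's individual pinna and torso cues, this is exactly the condition in which the study found poorer bisection thresholds than in free field.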


Subjects
Sound Localization, Visually Impaired Persons, Humans, Space Perception/physiology, Blindness/diagnosis, Sound Localization/physiology, Acoustics
14.
PLoS One ; 19(2): e0293811, 2024.
Article in English | MEDLINE | ID: mdl-38394286

ABSTRACT

A hearing aid or a contralateral routing of signal device are options for unilateral cochlear implant listeners with limited hearing in the unimplanted ear; however, it is uncertain which device provides greater benefit beyond unilateral listening alone. Eighteen unilateral cochlear implant listeners participated in this prospective, within-participants, repeated measures study. Participants were tested with the cochlear implant alone, cochlear implant + hearing aid, and cochlear implant + contralateral routing of signal device configurations, with a one-month take-home period between each in-person visit. Audiograms, speech perception in noise, and lateralization were evaluated. Subjective feedback was obtained via questionnaires. Marked improvements in speech perception in noise and in lateralization accuracy toward the non-implanted ear were observed with the addition of a contralateral hearing aid. There were no significant differences in speech recognition between listening configurations. However, the chronic device use questionnaires and the final device selection showed a clear preference for the hearing aid in the spatial awareness and communication domains. Individuals with limited hearing in their unimplanted ears demonstrate significant improvement with the addition of a contralateral device. Subjective questionnaires somewhat contrast with clinic-based outcome measures, highlighting the delicate decision-making process involved in clinically advising one device or another to maximize communication benefits.


Subjects
Cochlear Implantation, Cochlear Implants, Hearing Aids, Sound Localization, Speech Perception, Humans, Prospective Studies, Hearing
15.
Trends Hear ; 28: 23312165231217910, 2024.
Article in English | MEDLINE | ID: mdl-38297817

ABSTRACT

The present study aimed to define use of head and eye movements during sound localization in children and adults to: (1) assess effects of stationary versus moving sound and (2) define effects of binaural cues degraded through acute monaural ear plugging. Thirty-three youth (MAge = 12.9 years) and seventeen adults (MAge = 24.6 years) with typical hearing were recruited and asked to localize white noise anywhere within a horizontal arc from -60° (left) to +60° (right) azimuth in two conditions (typical binaural and right ear plugged). In each trial, sound was presented at an initial stationary position (L1) and then while moving at ∼4°/s until reaching a second position (L2). Sound moved in five conditions (±40°, ±20°, or 0°). Participants adjusted a laser pointer to indicate L1 and L2 positions. Unrestricted head and eye movements were collected with gyroscopic sensors on the head and eye-tracking glasses, respectively. Results confirmed that accurate sound localization of both stationary and moving sound is disrupted by acute monaural ear plugging. Eye movements preceded head movements for sound localization in normal binaural listening and head movements were larger than eye movements during monaural plugging. Head movements favored the unplugged left ear when stationary sounds were presented in the right hemifield and during sound motion in both hemifields regardless of the movement direction. Disrupted binaural cues have greater effects on localization of moving than stationary sound. Head movements reveal preferential use of the better-hearing ear and relatively stable eye positions likely reflect normal vestibular-ocular reflexes.


Subjects
Sound Localization, Adult, Child, Adolescent, Humans, Eye Movements, Hearing, Hearing Tests, Head Movements
16.
Eur J Neurosci ; 59(9): 2373-2390, 2024 May.
Article in English | MEDLINE | ID: mdl-38303554

ABSTRACT

Humans have the remarkable ability to integrate information from different senses, which greatly facilitates the detection, localization and identification of events in the environment. About 466 million people worldwide suffer from hearing loss. Yet, the impact of hearing loss on how the senses work together is rarely investigated. Here, we investigate how a common sensory impairment, asymmetric conductive hearing loss (AHL), alters the way our senses interact by examining human orienting behaviour with normal hearing (NH) and acute AHL. This type of hearing loss disrupts auditory localization. We hypothesized that this creates a conflict between auditory and visual spatial estimates and alters how auditory and visual inputs are integrated to facilitate multisensory spatial perception. We analysed the spatial and temporal properties of saccades to auditory, visual and audiovisual stimuli before and after plugging the right ear of participants. Both spatial and temporal aspects of multisensory integration were affected by AHL. Compared with NH, AHL caused participants to make slow, inaccurate and imprecise saccades towards auditory targets. Surprisingly, increased weight on visual input resulted in accurate audiovisual localization with AHL. This came at a cost: saccade latencies for audiovisual targets increased significantly. The larger the auditory localization errors, the less participants were able to benefit from audiovisual integration in terms of saccade latency. Our results indicate that observers immediately change sensory weights to effectively deal with acute AHL and preserve audiovisual accuracy in a way that cannot be fully explained by statistical models of optimal cue integration.
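The "statistical models of optimal cue integration" referred to in the last sentence typically weight each unisensory estimate by its reliability, i.e., the inverse of its variance. The following minimal sketch shows that rule; the numbers are illustrative and are not taken from the study.

```python
def optimal_integration(x_aud, var_aud, x_vis, var_vis):
    """Reliability-weighted (maximum-likelihood) combination of two location estimates."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_aud)
    x_av = w_vis * x_vis + (1 - w_vis) * x_aud     # combined location estimate
    var_av = 1 / (1 / var_vis + 1 / var_aud)       # combined variance (never larger than either input)
    return x_av, var_av

# Example: a noisy (plugged-ear) auditory estimate is down-weighted
# relative to a reliable visual estimate. Values are hypothetical.
print(optimal_integration(x_aud=15.0, var_aud=25.0, x_vis=5.0, var_vis=4.0))
```

Under this rule a degraded auditory estimate is automatically down-weighted toward vision, which resembles the re-weighting the authors observe, although they report that the observed behaviour is not fully captured by this optimal model.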


Subjects
Sound Localization, Visual Perception, Humans, Female, Adult, Male, Visual Perception/physiology, Sound Localization/physiology, Young Adult, Saccades/physiology, Auditory Perception/physiology, Hearing Loss/physiopathology, Photic Stimulation/methods, Acoustic Stimulation/methods, Space Perception/physiology
17.
Cogn Res Princ Implic ; 9(1): 4, 2024 01 08.
Article in English | MEDLINE | ID: mdl-38191869

ABSTRACT

Localizing sounds in noisy environments can be challenging. Here, we reproduce real-life soundscapes to investigate the effects of environmental noise on the sound localization experience. We evaluated participants' performance and metacognitive assessments, including measures of sound localization effort and confidence, while also tracking their spontaneous head movements. Normal-hearing participants (N = 30) were engaged in a speech-localization task conducted in three common soundscapes that progressively increased in complexity: nature, traffic, and a cocktail party setting. To control visual information and measure behaviors, we used visual virtual reality technology. The results revealed that the complexity of the soundscape had an impact on both performance errors and metacognitive evaluations. Participants reported increased effort and reduced confidence for sound localization in more complex noise environments. In contrast, the level of soundscape complexity did not influence the use of spontaneous exploratory head-related behaviors. We also observed that, irrespective of the noisy condition, participants who implemented a higher number of head rotations and explored a wider extent of space by rotating their heads made smaller localization errors. Interestingly, we found preliminary evidence that an increase in spontaneous head movements, specifically the extent of head rotation, leads to a decrease in perceived effort and an increase in confidence at the single-trial level. These findings expand previous observations regarding sound localization in noisy environments by broadening the perspective to also include metacognitive evaluations, exploratory behaviors and their interactions.


Subjects
Head Movements, Sound Localization, Humans, Sound, Exploratory Behavior, Mental Processes
18.
Article in English | MEDLINE | ID: mdl-38227005

ABSTRACT

The Journal of Comparative Physiology lived up to its name in the last 100 years by including more than 1500 different taxa in almost 10,000 publications. Seventeen phyla of the animal kingdom were represented. The honeybee (Apis mellifera) is the taxon with the most publications, followed by the locust (Locusta migratoria), crayfishes (Cambarus spp.), and the fruit fly (Drosophila melanogaster). The representation of species in this journal in the past thus differs much from the 13 model systems as named by the National Institutes of Health (USA). We mention major accomplishments of research on species with specific adaptations, specialist animals, for example, the quantitative description of the processes underlying the action potential in the squid (Loligo forbesii) and the isolation of the first receptor channel in the electric eel (Electrophorus electricus) and electric ray (Torpedo spp.). Future neuroethological work should make the recent genetic and technological developments available for specialist animals. There are many research questions left that may be answered with high yield in specialists, and some questions that can only be answered in specialists. Moreover, the adaptations of animals that occupy specific ecological niches often lend themselves to biomimetic applications. We go into some depth in explaining our thoughts on research into motion vision in insects, sound localization in barn owls, and electroreception in weakly electric fish.


Subjects
Electric Fish, Sound Localization, Strigiformes, Animals, Drosophila melanogaster, Sound Localization/physiology, Ocular Vision, Electrophorus
19.
Eur J Neurosci ; 59(7): 1770-1788, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38230578

ABSTRACT

Studies on multisensory perception often focus on simplistic conditions in which one single stimulus is presented per modality. Yet, in everyday life, we usually encounter multiple signals per modality. To understand how multiple signals within and across the senses are combined, we extended the classical audio-visual spatial ventriloquism paradigm to combine two visual stimuli with one sound. The individual visual stimuli presented in the same trial differed in their relative timing and spatial offsets to the sound, allowing us to contrast their individual and combined influence on sound localization judgements. We find that the ventriloquism bias is not dominated by a single visual stimulus but rather is shaped by the collective multisensory evidence. In particular, the contribution of an individual visual stimulus to the ventriloquism bias depends not only on its own relative spatio-temporal alignment to the sound but also the spatio-temporal alignment of the other visual stimulus. We propose that this pattern of multi-stimulus multisensory integration reflects the evolution of evidence for sensory causal relations during individual trials, calling for the need to extend established models of multisensory causal inference to more naturalistic conditions. Our data also suggest that this pattern of multisensory interactions extends to the ventriloquism aftereffect, a bias in sound localization observed in unisensory judgements following a multisensory stimulus.


Subjects
Auditory Perception, Sound Localization, Acoustic Stimulation, Photic Stimulation, Visual Perception, Humans
20.
PLoS One ; 19(1): e0296452, 2024.
Article in English | MEDLINE | ID: mdl-38165991

ABSTRACT

To achieve human-like behaviour during speech interactions, it is necessary for a humanoid robot to estimate the location of a human talker. Here, we present a method to optimize the parameters used for direction of arrival (DOA) estimation, while also considering real-time applications for human-robot interaction scenarios. The method is applied to a binaural sound source localization framework on a humanoid robotic head. Real data are collected and annotated for this work. Optimizations are performed via a brute-force method and a Bayesian model-based method; the results are validated and discussed, and the effects on latency for real-time use are also explored.
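As a rough illustration of the brute-force optimization mentioned above, the sketch below scores every combination on a small parameter grid against annotated data and keeps the best one. The parameter names (frame length, smoothing) and the evaluate() function are placeholders, not the parameters or code of the cited work.

```python
import itertools

frame_lengths_ms = [20, 40, 80]        # hypothetical analysis frame lengths
smoothing_factors = [0.1, 0.5, 0.9]    # hypothetical temporal smoothing values

def evaluate(frame_ms, smoothing):
    """Placeholder: return mean absolute DOA error (deg) on annotated recordings."""
    return abs(frame_ms - 40) * 0.1 + abs(smoothing - 0.5)   # dummy error landscape

# Exhaustively test every parameter combination and keep the lowest-error one.
best = min(
    itertools.product(frame_lengths_ms, smoothing_factors),
    key=lambda params: evaluate(*params),
)
print("Best (frame length ms, smoothing):", best)
```

A Bayesian alternative, as mentioned in the abstract, would replace the exhaustive grid with a surrogate model that proposes promising parameter settings, trading completeness for far fewer evaluations.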


Subjects
Robotics, Sound Localization, Humans, Bayes Theorem, Acoustics, Gravitation